Tunnel deployment instructions
Prepare id and certificate
- Generate a new unique id for the customer's tunnel by running the following in Python:
  import uuid
  print(uuid.uuid4().hex)
- Download the tunnelproxy-ca.pem file from AWS Secrets Manager in the il-central-1 region, from either the dev or the production AWS account
The following steps will require the generated tunnel id and the downloaded tunnelproxy-ca.pem file, so make sure to keep them until the deployment is completed
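As a convenience, the id generation above can be combined with a quick sanity check (uuid4().hex is always 32 lowercase hex characters with no dashes):

```python
import re
import uuid

# Generate the customer's tunnel id, as in the step above
tunnel_id = uuid.uuid4().hex
print(tunnel_id)

# Sanity check in case copy/paste mangles it: 32 lowercase hex chars, no dashes
assert re.fullmatch(r"[0-9a-f]{32}", tunnel_id)
```
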
Create a new Cloudflare tunnel for the customer
- Go to Cloudflare tunnels dashboard
- Click 'Create a tunnel'
- Choose type 'Cloudflared'
- Enter the tunnel id as the tunnel name and save
- Copy the tunnel access token after the tunnel is created
- Click next and under the 'Published applications' tab add the following:
  - Subdomain - the tunnel id
  - Domain - legion-tunnel.com
  - Service type - http
  - Service URL - localhost:8080
- Click 'Complete setup' to finish creating the tunnel
The following steps will require the tunnel access token, so make sure to keep it until the deployment is completed
Update Cloudflare Zero Trust policy
- Go to the Backend public IPs list in Cloudflare
- Add the 2 public IP addresses of our backend in the relevant AWS region to the list (if they're not already there) and save the list
Create a dedicated customer proxy instance
- In the infra repository's __main__.py server infra file, find the list of proxy_instances for the customer's region and add a proxy instance for the customer. The instance name should be the Legion customer id (org_...).
- Merge the change and deploy the infra
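As a rough sketch of the change (the real shape of proxy_instances depends on the infra code; treat the structure below as hypothetical and check the actual file before editing):

```python
# Hypothetical sketch only - the real proxy_instances structure lives in the
# infra repository's __main__.py and may differ from a plain list of names.
proxy_instances = [
    "org_existingcustomer1",
    "org_existingcustomer2",
    "org_newcustomer",  # added: instance named after the Legion customer id
]

# Instance names should all be Legion customer ids (org_...)
assert all(name.startswith("org_") for name in proxy_instances)
```
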
- In the AWS portal navigate to EC2 -> Instances under our AWS production account and under the customer's region, and select the tunnel proxy EC2 instance by its name. Note down the instance's private DNS name
- Click 'Connect', switch to the 'Session Manager' tab and click 'Connect' to connect to the instance
- (Optional but recommended: run bash in the opened terminal to use a bash shell instead of the default Ubuntu shell, which is more annoying to use)
- Create a folder for the proxy's files under /usr/proxy by running:
  sudo mkdir /usr/proxy
  cd /usr/proxy
- Copy the following files to the /usr/proxy folder:
  - The mitm_script.py, mitmproxy.service and .env files from the /tunnel_proxy folder in this repository. Make sure to update the values in the .env file (tunnel DNS host taken from the tunnel creation step - should be <tunnel-id>.legion-tunnel.com; client id and secret taken from AWS Secrets Manager in the il-central-1 region under 'infra_secrets' as TUNNEL_PROXY_CLIENT_ID and TUNNEL_PROXY_CLIENT_SECRET)
  - The tunnelproxy-ca.pem file (downloaded from AWS Secrets Manager in the preparation step)
  (The fastest way to copy the files is to copy them as text - open them in a text editor on your machine, copy all the text, run sudo nano <filename> in the EC2 instance, right click, paste, save)
  Verify that all files were copied by ensuring ls -a /usr/proxy shows 4 files.
- Prepare the expected folder structure and permissions:
  sudo mkdir /usr/proxy/certs
  sudo mv /usr/proxy/tunnelproxy-ca.pem /usr/proxy/certs/mitmproxy-ca.pem
  sudo chown -R ssm-user:ssm-user .
- Install mitmproxy:
  sudo apt update
  sudo apt install python3-pip libffi-dev libssl-dev -y
  pip3 install --upgrade pip --break-system-packages
  pip3 install --no-cache-dir mitmproxy==11.0.2 --break-system-packages
- Configure mitmproxy to run automatically on system boot (in case the VM restarts):
  sudo mv mitmproxy.service /etc/systemd/system/mitmproxy.service
  sudo systemctl daemon-reexec
  sudo systemctl daemon-reload
  sudo systemctl enable mitmproxy
  sudo systemctl start mitmproxy
- Verify everything is configured correctly by checking the service status and ensuring it shows 'active (running)' in green. If there's an issue, the errors should be shown in the command output:
  sudo systemctl status mitmproxy
If the service is showing errors or failed to load, there are a few ways to debug the issue:
- Copy the exec command from the mitmproxy.service file and run it directly from the terminal. (Note: this execution method doesn't load environment variables, so ignore errors related to them)
- View and tail the service logs by running sudo journalctl -u mitmproxy.service -f
After making any changes, run sudo systemctl restart mitmproxy to restart the service and run the status command again to see if the issue was resolved.
The following steps will require the EC2 instance private DNS name, so make sure to keep it until the deployment is completed
Prepare files to provide to the customer
- Prepare the following folder structure to provide to the customer:
  - config folder containing:
    - dlp_config.json file, exported from the /settings API for the customer organization. Should contain only a list of rules with 'mask', 'regex' and 'flags' fields. Can be exported by running dev_tools/export_customer_dlp_settings.py in the backend repo. Example output:
[
{
"mask": "Clinical Trial",
"regex": "\\b(NCT|nct)\\d{8}\\b",
"flags": ["g"]
}
]
Important: for the following rules, regexes must use (?:^|\\s)\\b as a prefix and \\b as a suffix:
- Credit Card Number
- US Fax Number
- US Phone Number
- Med License
- Med Record
Otherwise they can match random strings in JS files returned through the tunnel and break flows.
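To see why the anchoring matters, here is a small Python check (Python's re is used for illustration; the tunnel's own engine evaluates the rules with the 'g' flag semantics):

```python
import re

# The Clinical Trial rule from the example config (after JSON unescaping,
# "\\b" becomes the regex \b)
nct = re.compile(r"\b(NCT|nct)\d{8}\b")
assert nct.search("Patient enrolled in NCT12345678 last year")

# A purely \b-delimited numeric rule also fires inside tokens of minified JS
# served through the tunnel, while the (?:^|\s)\b-prefixed form does not:
loose = re.compile(r"\b\d{8}\b")
anchored = re.compile(r"(?:^|\s)\b\d{8}\b")
js_chunk = 'var cacheKey="build-20240101-assets";'
assert loose.search(js_chunk)          # false positive inside a JS identifier
assert not anchored.search(js_chunk)   # the anchored rule stays quiet
assert anchored.search("record 20240101 follows")  # still matches real text
```
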
  - .env file with the following (fill in the tunnel access token from the tunnel creation step):
    TUNNEL_TOKEN=<tunnel-access-token>
    DLP_CONFIG_FILE_PATH=/config/dlp_config.json
    Optionally, if the customer has internal servers we need to call over HTTPS that use a certificate from a custom CA, the customer can create a folder with the cer/crt files of the custom CA and add the following to the .env file (any path is ok as long as it's volume mounted to the container):
    EXTERNAL_TRUSTED_CERTS_DIR=/config/custom_ca
    Optionally, if the customer's upstream requires mutual TLS (client certificate authentication), set the following environment variable and mount the referenced folder to include per-profile certificate/key files:
    CLIENT_CERTS_DIR=/config/client_certs
    Expected structure example:
    config/client_certs/<profile-name>/cert.crt
    config/client_certs/<profile-name>/key.pem
- Zip the folder and provide it to the customer securely (since it contains the access token and a certificate with its private key)
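Before zipping, a quick local sanity check of the bundle can catch missing files (a sketch assuming the folder layout described above; the check_bundle helper is ours, not part of any repo):

```python
from pathlib import Path


def check_bundle(root: Path) -> list[str]:
    """Return a list of problems with the customer bundle; empty means OK."""
    problems = []
    if not (root / "config" / "dlp_config.json").is_file():
        problems.append("missing config/dlp_config.json")
    env = root / ".env"
    if not env.is_file():
        problems.append("missing .env")
    else:
        text = env.read_text()
        if "TUNNEL_TOKEN=" not in text:
            problems.append(".env has no TUNNEL_TOKEN")
        if "DLP_CONFIG_FILE_PATH=" not in text:
            problems.append(".env has no DLP_CONFIG_FILE_PATH")
    return problems
```

Run check_bundle(Path("customer_bundle")) and expect an empty list before zipping.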
Tunnel execution on the customer side
- The tunnel should be run by executing the following from the unzipped folder:
  sudo docker login legiontunnel.azurecr.io --username <LegionTunnelAcrCustomerPull-application id> --password <LegionTunnelAcrCustomerPull-application secret>
  sudo docker run -d \
    --env-file .env \
    -v $(pwd)/config:/config \
    --name legion-tunnel \
    --pull always \
    --restart unless-stopped \
    legiontunnel.azurecr.io/legion-tunnel:latest
  Notes:
  - Docker installation instructions by OS can be found in the Docker documentation.
  - The application ID and secret for LegionTunnelAcrCustomerPull can be found in 1Password under 'LegionTunnelAcrCustomerPull'.
- After the customer has successfully run the docker container, verify the container is running correctly by:
  - Having the customer run sudo docker ps -a to verify the container started correctly and is running
  - Having the customer run sudo docker logs <container id> (with the container id from the docker ps command) to verify that there are no error logs from the container
  - Checking the tunnel status in the Cloudflare tunnels dashboard - it should be marked with a green 'Healthy' state
  These checks will verify the tunnel is connected and the container is running, but we have no way to verify the DLP proxy until trying to run an automation through it
If there are issues with the tunnel, we can ask the customer to add a VERBOSITY=debug entry to the .env file and restart the container, so that docker logs will show much more verbose data for us to debug the issues.
Enable proxy for autonomous investigations
- Update MongoDB with the customer's tool-to-proxy mapping so the relevant tools in autonomous mode will go through the created proxy. Do this by running the dev_tools/add_proxy.py file and specifying:
  - The customer id
  - The relevant tool that needs the proxy (run the script more than once for multiple tools)
  - The EC2 instance private DNS name as the proxy url, which is the DNS name with an 'http://' prefix and port 1380
  - Proxy type 'tunnel'
For example:
add_proxy(
customer_id="org_customerid",
tool="TheHive",
type="tunnel",
proxy_url="http://ip-172-31-28-15.il-central-1.compute.internal:1380",
)
The next autonomous run using a skill from this tool should go through the tunnel.
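The proxy_url passed above can be derived mechanically from the private DNS name noted during the EC2 step (this helper is illustrative, not part of dev_tools):

```python
def tunnel_proxy_url(private_dns: str, port: int = 1380) -> str:
    """Build the proxy url expected by add_proxy: http:// prefix plus port 1380."""
    return f"http://{private_dns}:{port}"


print(tunnel_proxy_url("ip-172-31-28-15.il-central-1.compute.internal"))
# -> http://ip-172-31-28-15.il-central-1.compute.internal:1380
```
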
Verify in the tunnel and worker logs that the run was successful, or find and fix issues according to the error logs.